
    Social robot deception and the culture of trust

    Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots interject themselves in human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipient of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as it has the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion: regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.

    The foundations of a policy for the use of social robots in care

    Should we deploy social robots in care settings? This question, asked from a policy standpoint, requires that we understand the potential benefits and downsides of deploying social robots in care situations. Potential benefits could include increased efficiency, increased welfare, physiological and psychological benefits, and experienced satisfaction. There are, however, important objections to the use of social robots in care. These include the possibility that relations with robots can displace human contact, that these relations could be harmful, that robot care is undignified and disrespectful, and that social robots are deceptive. I propose a framework for evaluating all these arguments in terms of three aspects of care: structure, process, and outcome. I then highlight the main ethical considerations that have to be made in order to untangle the web of pros and cons of social robots in care, as these pros and cons relate to trade-offs regarding the quantity and quality of care, process and outcome, and objective and subjective outcomes.

    The Parasitic Nature of Social AI: Sharing Minds with the Mindless

    Can artificial intelligence (AI) develop the potential to be our partner, and will we be as sensitive to its social signals as we are to those of human beings? I examine both of these questions and how cultural psychology might add them to its research agenda. There are three areas in which I believe there is a need for both better understanding and added perspective. First, I present some important concepts and ideas from the world of AI that might be beneficial for pursuing research topics focused on AI within the cultural psychology research agenda. Second, there are some very interesting questions that must be answered with respect to central notions in cultural psychology as these are tested through human interactions with AI. Third, I claim that social robots are parasitic on deeply ingrained human social behaviour, in the sense that they exploit and feed upon processes and mechanisms that evolved for purposes originally completely alien to human-computer interactions.

    Confounding Complexity of Machine Action: A Hobbesian Account of Machine Responsibility

    In this article, the core concepts in Thomas Hobbes’s framework of representation and responsibility are applied to the question of machine responsibility and the related responsibility and retribution gaps. The method is philosophical analysis and involves the application of theories from political theory to the ethics of technology. A veil of complexity creates the illusion that machine actions belong to a mysterious and unpredictable domain, and some argue that this unpredictability absolves designers of responsibility. Such a move would create a moral hazard related to both (a) strategically increasing unpredictability and (b) taking more risk if responsible humans do not have to bear the costs of the risks they create. Hobbes’s theory allows for the clear and arguably fair attribution of action while allowing for necessary development and innovation. Innovation will be allowed as long as it is compatible with social order and provided the beneficial effects outweigh concerns about increased risk. Questions of responsibility are here considered to be political questions.

    The ethics of trading privacy for security: The multifaceted effects of privacy on liberty and security

    A recurring question in political philosophy is how to understand and analyse the trade-off between security and liberty. With modern technology, however, it is possible to argue that the former trade-off can be replaced by the trade-off between security and privacy. I focus on the ethical considerations involved in the trade-off between privacy and security in relation to policy formation. Firstly, different conceptions of liberty entail different functions of privacy. Secondly, privacy and liberty form a complex and interdependent relationship with security. Some security is required for privacy and liberty to have value, but attempting to increase security beyond the required level will erode the value of both, and in turn threaten security. There is no simple balance between any of the concepts, as all three must be considered, and their relationships are complex. This necessitates a pluralistic theoretical approach in order to evaluate policymaking related to the proposed trade of privacy for security.

    A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government

    Artificial intelligence (AI) has proven to be superior to human decision-making in certain areas. This is particularly the case whenever there is a need for advanced strategic reasoning and analysis of vast amounts of data in order to solve complex problems. Few human activities fit this description better than politics. In politics we deal with some of the most complex issues humans face, short-term and long-term consequences have to be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections based on political theoretical considerations and the potential political harms of an AI technocracy: objections based on the ideas of ‘political man’ and participation as a prerequisite for legitimacy, the non-morality of machines, and the value of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, provided that mechanisms for control and backup are in place, and that we design a system in which humans have control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities for policy formation assumed here become reality, may in theory provide us with better means of participation, greater legitimacy, and more efficient government.

    AI in Context and the Sustainable Development Goals: Factoring in the Unsustainability of the Sociotechnical System

    Artificial intelligence (AI) is associated with both positive and negative impacts on people and planet, and much attention is currently devoted to analyzing and evaluating these impacts. In 2015, the UN set 17 Sustainable Development Goals (SDGs), consisting of environmental, social, and economic goals. This article shows how the SDGs provide a novel and useful framework for analyzing and categorizing the benefits and harms of AI. AI is here considered in context as part of a sociotechnical system consisting of larger structures and economic and political systems, rather than as a simple tool that can be analyzed in isolation. This article distinguishes between direct and indirect effects of AI and divides the SDGs into five groups based on the kinds of impact AI has on them. While AI has great positive potential, it is also intimately linked to nonuniversal access to increasingly large data sets and the computing infrastructure required to make use of them. As a handful of nations and companies control the development and application of AI, important questions arise regarding the potential negative implications of AI for the SDGs. The conceptual framework presented here helps structure the analysis of which of the SDGs AI might be useful in attaining and which goals are threatened by its increased use.

    The tyranny of perceived opinion: Freedom and information in the era of big data

    Never before have we had access to as much information as we do today, but how do we avail ourselves of it? In parallel with the increase in the amount of information, we have created means of curating and delivering it in sophisticated ways, through the technologies of algorithms, Big Data and artificial intelligence. I examine how information is curated, and how digital technology has led to the creation of filter bubbles, while simultaneously creating closed online spaces in which people of similar opinions can congregate – echo chambers. These phenomena partly stem from our tendency towards selective exposure – a tendency to seek information that supports pre-existing beliefs, and to avoid unpleasant information. This becomes a problem when the information and suggestions we receive, and the way we are portrayed, create expectations and thus become leading. When the technologies I discuss are employed as they are today, combined with human nature, they pose a threat to liberty by undermining individuality, autonomy and the very foundation of liberal society. Liberty is an important part of our image of the good society, and this article is an attempt to analyse one way in which applications of technology can be detrimental to our society. While Alexis de Tocqueville feared the tyranny of the majority, we would do well to fear the tyranny of the algorithms and perceived opinion.

    Challenging the Neo-Anthropocentric Relational Approach to Robot Rights

    When will it make sense to consider robots candidates for moral standing? Major disagreements exist between those who find that question important and those who do not, and also between those united in their willingness to pursue it. I home in on the approach to robot rights called relationalism, and ask: if we grant robots moral standing based on how humans relate to them, are we moving past human chauvinism, or are we merely putting a new dress on it? The background for the article is the clash between those who argue that robot rights are possible and those who see a fight for robot rights as ludicrous, unthinkable, or outright harmful and disruptive for humans. The latter group is branded by some as human chauvinist and anthropocentric, and its members are criticized and portrayed as backward, unjust, and ignorant of history. Relationalism, in contrast, purportedly opens the door to considering robot rights and moving past anthropocentrism. However, I argue that relationalism is, quite to the contrary, a form of neo-anthropocentrism that recenters human beings and their unique ontological properties, perceptions, and values. I do so by raising three objections: 1) relationalism centers human values and perspectives, 2) it is indirectly a type of properties-based approach, and 3) edge cases reveal potentially absurd implications in practice.

    Robotomorphy: Becoming our creations

    Humans and gods alike have since the dawn of time created objects in their own image: from clay figures and wooden toys—some granted life in myths and movies, others dead representations of their creators—to modern-day robots that mimic their creators in more than appearance. These objects tell the story of how we perceive ourselves, and in this article I examine how they also change us. Robotomorphy describes what occurs when we project the characteristics and capabilities of robots onto ourselves in order to make sense of the complicated and mysterious beings that we are. Machines are, after all, relatively comprehensible, and they help dispel the discomfort associated with complex human concepts such as consciousness, free will, the soul, etc. I then argue that using robots as the mirror image by which we understand ourselves entails an unfortunate reductionism. When robots become the blueprint for humanity, they simultaneously become benchmarks and ideals to live up to, and suddenly the things we make are no longer representations of ourselves, but we of them. This gives rise to a recursive process in which the mirror mirrors itself and influences both the trajectory of machine development and human self-perception.